add resource journal #6586
Conversation
Branch force-pushed from 5e12001 to 3ff612f.
I added a quick [...]
Yeah, let me see if I can generalize the [...]
Branch force-pushed from 3ff612f to e047195.
Did we want to add this to 0.71 as an experimental feature?
I was halfway through reviewing it and had a few commits that abstract the current Python [...]. Have we estimated what the size of the in-memory history of all events would be like on something like Elcap with a long uptime?
That might be worrisome actually, now that you mention it. Perhaps we'll need some way to cull non-persistent events (e.g. other than drain/undrain) that are older than a threshold. It might be a bit too last-minute to add that here, sadly.
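For illustration only, here is a minimal sketch of the culling idea floated above, assuming events are kept in a jansson array of eventlog-style entries with "timestamp" and "name" keys (as the commit messages later in this thread describe). The cull_events/is_persistent helpers, the age threshold, and the choice of drain/undrain as the persistent events are all hypothetical, not from this PR.

```c
/* Hypothetical sketch: cull aged, non-persistent events from an in-memory
 * jansson array of eventlog entries.  Assumes each entry has "timestamp"
 * (double) and "name" (string) keys; the persistent-event list and age
 * threshold are illustrative only.
 */
#include <stdbool.h>
#include <string.h>
#include <time.h>
#include <jansson.h>

static bool is_persistent (const char *name)
{
    /* Illustrative: keep only drain/undrain events indefinitely */
    return !strcmp (name, "drain") || !strcmp (name, "undrain");
}

static void cull_events (json_t *eventlog, double max_age)
{
    double now = time (NULL);
    size_t i = 0;

    while (i < json_array_size (eventlog)) {
        json_t *entry = json_array_get (eventlog, i);
        double timestamp;
        const char *name;

        if (json_unpack (entry, "{s:F s:s}",
                         "timestamp", &timestamp,
                         "name", &name) == 0
            && !is_persistent (name)
            && now - timestamp > max_age)
            json_array_remove (eventlog, i); /* later entries shift left */
        else
            i++;
    }
}
```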
I'm not sure there's a rush to add this as experimental, because I'm not sure there's an immediate use case without the Python consumer class (unless I'm forgetting something!).
I think this is a priority for @kkier, but there is little advantage in putting this in as-is; in fact, it may cause problems. Let's try to deliver a good first step in the next release.
Just confirming: this is a priority for me (and the people I report to on both sides of the aisle, as it were), but if it ends up in an off-cycle release that I have to manually distribute, that's a thing I can deal with.
To be clear, although it's not that hard on our end to produce one-off tags and packages, this particular change modifies a core service and would require downtime to distribute.
One thing I've noticed in my testing is that canceling the journal RPC with [...]. Of course it is possible I've made a mistake in testing.
100% understood; it is what it is.
It should (if spelled correctly :-). However, I didn't add a test for that, so I should verify.
I stumbled across the problem, I think:

```diff
diff --git a/src/modules/resource/reslog.c b/src/modules/resource/reslog.c
index 61aae1218..513d54cc6 100644
--- a/src/modules/resource/reslog.c
+++ b/src/modules/resource/reslog.c
@@ -328,7 +328,7 @@ static void journal_cb (flux_t *h,
         errstr = "journal requires streaming RPC flag";
         goto error;
     }
-    if (send_backlog (reslog, msg))
+    if (!send_backlog (reslog, msg))
         return;
     if (flux_msglist_append (reslog->consumers, msg) < 0)
         goto error;
```
Aww, I tested that. I must have broken it with a last-minute change. I need to add a test too :-(
Problem: the reslog class has no way to access the resource inventory, but it will be useful to send journal consumers a copy of R when the resource-define event is emitted.

Pass the resource_ctx to reslog_create() instead of just the flux_t handle. Adjust internal uses of the flux_t handle to get it via reslog->ctx->h instead of reslog->h.

Problem: the full resource eventlog, including online/offline events that are not committed to the KVS, may need to be monitored.

Keep events in a json array in memory, including the events that were read from the KVS at startup, if any. Filter out any historical resource-define events: these are meant for synchronization on the availability of R, which only pertains to the current instance.

Problem: there is no way to observe the journal in real time, with non-persistent online/offline events included.

Add a resource.journal RPC with a protocol similar to the job manager journal.
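As an aside, a journal consumer could be sketched with the generic streaming-RPC pattern from the public flux API. This is a hedged illustration only, not code from this PR: the request options and exact response layout are defined by the proposed RFC, so the sketch just prints each raw JSON response.

```c
/* Minimal consumer sketch (not from this PR): open the resource.journal
 * streaming RPC and print each raw JSON response until the stream ends
 * or an error occurs.  The response payload is not interpreted here.
 */
#include <stdio.h>
#include <flux/core.h>

int main (void)
{
    flux_t *h;
    flux_future_t *f;
    const char *s;

    if (!(h = flux_open (NULL, 0)))
        return 1;
    if (!(f = flux_rpc (h, "resource.journal", NULL,
                        FLUX_NODEID_ANY, FLUX_RPC_STREAMING)))
        return 1;
    while (flux_rpc_get (f, &s) == 0) { /* blocks for the next response */
        printf ("%s\n", s);
        flux_future_reset (f);          /* re-arm the future for more data */
    }
    flux_future_destroy (f);
    flux_close (h);
    return 0;
}
```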
Problem: a resource journal consumer will get online/offline events before knowing the size of the instance or the hostname mapping.

Post a 'restart' event when the resource module is loaded with the following keys:

  ranks     An idset containing all valid ranks: 0 to size-1
  online    An idset containing any ranks that are initially online.
            This is normally empty except when starting with
            monitor-force-up in test.
  nodelist  Contents of the hostlist broker attribute

This event is not made persistent in the KVS resource.eventlog.
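To illustrate why the restart event is useful, here is another hedged sketch (again, not from this PR) that turns the ranks and nodelist strings described above into a rank-to-hostname map using the public idset and hostlist APIs. print_rank_map is a hypothetical helper; extracting the strings from the event context is left out.

```c
/* Hypothetical helper: given the "ranks" idset string and "nodelist"
 * hostlist string from a restart event, print a rank -> hostname map.
 */
#include <stdio.h>
#include <flux/idset.h>
#include <flux/hostlist.h>

static int print_rank_map (const char *ranks, const char *nodelist)
{
    struct idset *ids;
    struct hostlist *hl;
    unsigned int rank;

    if (!(ids = idset_decode (ranks)))
        return -1;
    if (!(hl = hostlist_decode (nodelist))) {
        idset_destroy (ids);
        return -1;
    }
    rank = idset_first (ids);
    while (rank != IDSET_INVALID_ID) {
        const char *host = hostlist_nth (hl, (int)rank);
        printf ("%u: %s\n", rank, host ? host : "?");
        rank = idset_next (ids, rank);
    }
    hostlist_destroy (hl);
    idset_destroy (ids);
    return 0;
}
```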
Problem: there is no convenient tool for accessing the resource journal. Add flux resource eventlog [--follow] [--wait=EVENT].
Problem: there are no tests for the resource journal or the flux resource eventlog command. Add a sharness test for this purpose.
Problem: flux resource eventlog has no documentation. Add an entry to the man page.
Branch force-pushed from e047195 to 6769b2e.
This LGTM!
Thanks! I'll set MWP.
Codecov Report
Attention: Patch coverage is [...]

Additional details and impacted files:

```
@@           Coverage Diff            @@
##           master    #6586    +/-   ##
=========================================
- Coverage   79.53%   79.50%   -0.04%
=========================================
  Files         531      531
  Lines       88213    88354     +141
=========================================
+ Hits        70160    70242      +82
- Misses      18053    18112      +59
```
This adds a resource journal streaming RPC similar to the one offered by the job manager. It's just a first cut at this point.

This doesn't change what's posted to the persistent resource.eventlog in the KVS, but it does add one new event called restart that's only for journal consumption. It provides a baseline for mapping execution targets to hostnames in the current instance, and sets the initial online set after a restart. Unlike the job manager journal, this doesn't have as much volume to deal with, so no options for event filtering or skipping historical data are provided as yet.

flux resource eventlog can be used to dump and optionally follow this log. It is not currently polished at all; it just dumps the events in JSON form, one per line. For more detail on what's in this log and how the journal is formatted, see the proposed RFC: